Wrong hands


Microsoft's new AI tool that takes screenshots of your laptop every few seconds is dubbed a 'privacy nightmare' by experts

Daily Mail - Science & tech

Microsoft's latest AI-powered tool is giving your computer a 'photographic memory' – but experts are concerned it could come at a cost to your privacy. The new tool, called 'Recall', automatically takes screenshots of your laptop every few seconds that you can browse through later. Microsoft says the screenshots are stored locally on your computer and can't be accessed by the tech giant's staff, or any remote hacker. However, experts have shared concerns that it could make it easier for people to get personal information from your device if it falls into the wrong hands. Dr Kris Shrishak, an adviser on AI and privacy, called the tool a potential 'privacy nightmare'.


AI is going to listen to YOUR medical appointment! Health Secretary's new plan to free up doctors' time triggers outrage as critics slam 'creepy' idea and warn confidential medical info could end up in wrong hands

Daily Mail - Science & tech

AI will listen in to doctors' appointments and automatically generate patient notes in a bid to improve productivity in the NHS. Health Secretary Victoria Atkins said the plans will cut the time medics spend on admin so they are free to see more patients. But privacy campaigners today described the move as 'creepy', while patient groups warned people could come to harm as they will be too embarrassed to discuss medical issues freely while being recorded. Chancellor Jeremy Hunt yesterday announced a £3.4 billion investment in NHS productivity through things such as expanding the use of AI, reducing paperwork for medics and improving access for patients. In a major keynote speech at the Nuffield Trust think tank's annual summit, Ms Atkins today said the 'enormous amount of money' would be transformative.


TechScape: Will Meta's open-source LLM make AI safer – or put it into the wrong hands?

The Guardian

The AI summer is well and truly upon us. Whether we call this period the peak of the "hype cycle" or simply the moment the curve goes vertical will only be obvious in hindsight, but the cadence of big news in the field has gone from weekly to almost daily. Let's catch up with what the biggest players in AI – Meta, Microsoft, Apple and OpenAI – are doing. Apple is always one to keep its cards close to its chest, so don't expect to hear of many R&D breakthroughs from Cupertino. Even the AI work that has made it into shipping products is hidden rather than shouted from the rooftops, with the company talking about "machine learning" and "transformers" at its annual worldwide developer conference (WWDC) last month, but conspicuously steering clear of saying "AI".


AI is already causing unintended harm. What happens when it falls into the wrong hands? | David Evan Harris

The Guardian

A researcher was granted access earlier this year by Facebook's parent company, Meta, to incredibly potent artificial intelligence software – and leaked it to the world. As a former researcher on Meta's civic integrity and responsible AI teams, I am terrified by what could happen next. Though Meta was violated by the leak, it came out as the winner: researchers and independent coders are now racing to improve on or build on the back of LLaMA (Large Language Model Meta AI – Meta's branded version of a large language model or LLM, the type of software underlying ChatGPT), with many sharing their work openly with the world. This could position Meta as owner of the centrepiece of the dominant AI platform, in much the same way that Google controls the open-source Android operating system that is built on and adapted by device manufacturers globally. If Meta were to secure this central position in the AI ecosystem, it would have leverage to shape the direction of AI at a fundamental level, controlling the experiences of individual users and setting limits on what other companies could and couldn't do.


Artificial general intelligence in the wrong hands could do 'really dangerous stuff,' experts warn

FOX News

AGI, while powerful, could have negative consequences, warned Diveplane CEO Mike Capps and Liberty Blockchain CCO Christopher Alexander. Artificial general intelligence – the kind of AI that has capabilities similar to humans – may be far off and offer new opportunities, but experts warn it could be dangerous and have drastic implications for white-collar workers. "I'm about as excited about AGI as I am about nuclear fission," Diveplane CEO Dr. Michael Capps told Fox News Digital. "It's really amazing what we can do with it, it can power our society, but in the wrong hands, it can do some really dangerous stuff." While there is no one definition of AGI, a 2020 report from consulting giant McKinsey said such a machine would need to master human-like skills, such as fine motor skills and natural language processing.


I asked ChatGPT to write a Harry Potter fan fiction; the result will blow your mind.

#artificialintelligence

As a Harry Potter fan and a lover of writing, I was curious to see what would happen if I asked ChatGPT (Generative Pretrained Transformer) to write a Harry Potter fan fiction. So, I fed ChatGPT a few prompts and let it do its magic. The result was a piece of fan fiction titled "The Lost Diadem of Ravenclaw", which follows the story of Harry, Ron, and Hermione as they embark on a quest to find the lost diadem of Ravenclaw. The diadem, which is said to enhance the intelligence of its wearer, has been missing for centuries and is believed to be hidden in the Forbidden Forest. As they journey through the forest, the trio encounters a number of obstacles and challenges, including an encounter with a pack of werewolves and a showdown with the infamous Death Eater Bellatrix Lestrange. Despite the challenges they face, Harry, Ron, and Hermione persevere and eventually find the lost diadem.


Experts Warn Arms For Ukraine Could End Up In Wrong Hands

International Business Times

Western countries have been ramping up weapons and ammunition shipments to Ukraine as Kyiv fights off a Russian invasion, but arms trade experts warn some of the lethal assistance could end up falling into the wrong hands. Ukraine in particular has a history as a hub of the arms trade during the 1990s, setting off alarm bells for those who study illicit flows. "There are very significant risks associated to the proliferation of weapons in Ukraine at the moment, in particular regarding small arms and light weapons," said Nils Duquet, a researcher and director of the Flemish Peace Institute. Western nations, above all the US, have announced successive shipments of both light and heavy weapons for Kyiv's forces since Russian troops crossed the Ukrainian border on February 24. Washington alone has delivered or promised military gear including hundreds of Switchblade kamikaze drones, 7,000 assault rifles with 50 million rounds of ammunition, laser-guided missiles and radar systems to detect enemy drones and incoming artillery fire.


How artificial intelligence is revolutionising drug design

#artificialintelligence

Imagine you wanted to design a drug for a new disease, 'Disease X', about which little is known. Imagine then that you have a machine that could use all the available data in the world about Disease X to identify a potential mechanism of disease and use this to predict which molecules within this mechanism could make suitable targets for drugs against the disease. Then, a machine would virtually design a drug targeting these optimal molecules, building it bit by bit and continuously checking it against the target's structure to ensure activity at the desired binding site. Once the drug was "built", it could then be synthesised and, following various rounds of in vitro, in vivo, and clinical testing to validate its efficacy, the drug could be used in clinical practice. Although a machine like this does not yet exist, advocates of artificial intelligence (AI) propose that AI has the potential to revolutionise drug design, turning this imaginary scenario -- at least in part -- into a reality.


Deep Instinct BrandVoice: What Happens When AI Falls Into The Wrong Hands?

#artificialintelligence

Artificial intelligence (AI) is one of the most discussed technology fields today – and for good reason. AI will soon impact nearly every aspect of our lives and we have only just begun scratching the surface of AI's true potential. With AI, we are deepening our knowledge of human genetics and delivering leaps in medicine, deploying self-driving vehicles and robots for an array of industries, and combating fraud and cybercrime, to name just a few of the growing list of applications. However, as with any nascent technology, AI has the potential to cause harm when placed in the wrong hands. We've begun seeing AI used for nefarious purposes, chiefly in the form of AI-facilitated cyberattacks, and we forecast adversarial AI to be the next challenge in this area.


Google fires top AI ethics researcher Margaret Mitchell – TechCrunch

#artificialintelligence

Google has fired Margaret Mitchell, the founder and former co-lead of the company's ethical AI team. Mitchell announced the news via a tweet. Google confirmed Mitchell's firing in a statement to TechCrunch: "After conducting a review of this manager's conduct, we confirmed that there were multiple violations of our code of conduct, as well as of our security policies, which included the exfiltration of confidential business-sensitive documents and private data of other employees." In January, Google revoked Mitchell's corporate access for reportedly using automated scripts to find examples of mistreatment of Dr. Timnit Gebru, according to Axios. Gebru says she was fired from Google, while Google has maintained that she resigned.